4 research outputs found

    Deep Learning for the Acceleration of Magnetic Resonance Fingerprinting

    Magnetic resonance fingerprinting (MRF) is a quantitative imaging technique that can simultaneously measure multiple important tissue properties of the human body. Although MRF has demonstrated improved scan efficiency compared to conventional techniques, further acceleration is still desired for translation into routine clinical practice. The purpose of this work is to accelerate MRF acquisition by developing a new tissue quantification method that allows accurate quantification from less sampled data. Most existing approaches use the MRF signal evolution at each individual pixel to estimate tissue properties, without considering the spatial association among neighboring pixels. In this report, I propose a spatially-constrained quantification method that uses the signals at multiple neighboring pixels to better estimate tissue properties at the central pixel. Specifically, I have designed a two-step deep learning model to estimate the tissue property (T1 or T2) maps from the observed MRF signals, comprising 1) a feature extraction module that reduces the dimensionality of the signals by extracting a low-dimensional feature vector from the high-dimensional signal evolution, and 2) a spatially-constrained quantification module that exploits the spatial information in the extracted feature maps to generate the final tissue property map. A corresponding two-step training strategy was developed for network training. The proposed method was tested on highly undersampled MRF data acquired from human brains. The experimental results demonstrate that the proposed method achieves accurate quantification of T1 and T2 relaxation times using only one quarter of the time points of the original sequence (i.e., a four-fold acceleration of the MRF acquisition). Furthermore, a rapid 2D MRF technique with sub-millimeter in-plane resolution was developed using a deep-learning-based quantification approach for brain T1 and T2 quantification.
Specifically, the 2D acquisition was performed using a FISP sequence and a spiral trajectory with 0.8 mm in-plane resolution. A novel network architecture, a residual channel attention U-Net, was proposed to improve high-resolution details in the estimated tissue maps. Quantitative brain imaging was performed on five adult and two pediatric subjects, and the performance of the proposed approach was compared to several existing methods in the literature. In vivo measurements on both adult and pediatric subjects show that high-quality T1 and T2 mapping with 0.8 mm in-plane resolution can be achieved in 7.5 s per slice. The proposed deep learning method outperformed existing algorithms in tissue quantification with improved accuracy. Compared to the standard U-Net, high-resolution details in brain tissues were better preserved by the proposed residual channel attention U-Net. The experiments on pediatric subjects further demonstrate the potential of the proposed technique for fast pediatric neuroimaging. Alongside the reduced data acquisition time, a five-fold acceleration in tissue property mapping was also achieved with the proposed method.
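The two-step, spatially-constrained quantification described above can be sketched in toy form. This is illustrative only, not the author's trained model: the fixed random linear maps below are hypothetical stand-ins for the learned feature-extraction and quantification networks. Step 1 projects each pixel's long signal evolution onto a low-dimensional feature vector; step 2 estimates the central pixel's tissue property from a neighborhood of feature vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(signals, basis):
    """Step 1: reduce each pixel's signal evolution to a short feature vector.

    signals: (H, W, T) MRF signal evolutions; basis: (T, K) projection.
    Returns a (H, W, K) feature map.
    """
    return signals @ basis

def quantify_with_neighbors(features, weights, pad=1):
    """Step 2: estimate the property at each pixel from the (2*pad+1)^2
    neighborhood of feature vectors (a fixed linear map stands in for the
    learned spatially-constrained quantification network)."""
    H, W, K = features.shape
    padded = np.pad(features, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + 2 * pad + 1, j:j + 2 * pad + 1, :].ravel()
            out[i, j] = patch @ weights
    return out

T, K = 96, 4  # e.g. one quarter of a 384-point sequence, reduced to 4 features
signals = rng.normal(size=(8, 8, T))          # hypothetical undersampled data
basis = rng.normal(size=(T, K)) / np.sqrt(T)  # stand-in for module 1
weights = rng.normal(size=(9 * K,)) / (9 * K) # stand-in for module 2 (3x3 patch)

t1_map = quantify_with_neighbors(extract_features(signals, basis), weights)
print(t1_map.shape)  # → (8, 8)
```

The point of the second step is that each output pixel depends on a 3x3 patch of feature vectors rather than a single pixel's signal, which is what distinguishes this design from per-pixel dictionary matching.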

    What's in a Prior? Learned Proximal Networks for Inverse Problems

    Proximal operators are ubiquitous in inverse problems, commonly appearing as part of algorithmic strategies to regularize problems that are otherwise ill-posed. Modern deep learning models have been brought to bear on these tasks too, as in the framework of plug-and-play or deep unrolling, where they loosely resemble proximal operators. Yet something essential is lost in employing these purely data-driven approaches: there is no guarantee that a general deep network represents the proximal operator of any function, nor is there any characterization of the function for which the network might provide an approximate proximal operator. This not only makes guaranteeing convergence of iterative schemes challenging but, more fundamentally, complicates the analysis of what these networks have learned about their training data. Herein we provide a framework to develop learned proximal networks (LPNs), prove that they provide exact proximal operators for a data-driven nonconvex regularizer, and show how a new training strategy, dubbed proximal matching, provably promotes the recovery of the log-prior of the true data distribution. Such LPNs provide general, unsupervised, expressive proximal operators that can be used for general inverse problems with convergence guarantees. We illustrate our results on a series of cases of increasing complexity, demonstrating that these models not only achieve state-of-the-art performance but also provide a window into the priors learned from data.
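To make the notion of a proximal operator concrete, here is a minimal, self-contained sketch; it is not the paper's LPN, and the problem sizes and names are made up. The proximal operator of a regularizer R is prox_R(v) = argmin_x (1/2)||x - v||^2 + R(x); for R(x) = lam*||x||_1 it has the closed form of soft-thresholding, and alternating gradient steps with this prox gives the classic ISTA iteration. An LPN would replace soft_threshold below with a network guaranteed to be the exact prox of some learned (possibly nonconvex) regularizer, which is what makes convergence analysis of the resulting plug-and-play scheme tractable.

```python
import numpy as np

def soft_threshold(v, lam):
    """Exact proximal operator of R(x) = lam * ||x||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def ista(A, y, lam, iters=3000):
    """Proximal-gradient (ISTA) for 0.5*||Ax - y||^2 + lam*||x||_1.
    Swapping soft_threshold for a learned proximal network yields a
    plug-and-play iteration of the kind the abstract discusses."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L for the smooth data term
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - step * (A.T @ (A @ x - y)), step * lam)
    return x

# Hypothetical sparse-recovery instance: 20 measurements of a 3-sparse,
# 40-dimensional signal (noiseless, small lam => near-exact recovery).
rng = np.random.default_rng(1)
A = rng.normal(size=(20, 40)) / np.sqrt(20)
x_true = np.zeros(40)
x_true[[3, 17, 30]] = [1.5, -2.0, 1.0]
x_hat = ista(A, A @ x_true, lam=0.02)
```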

    High-resolution image reconstruction for portable ultrasound imaging devices

    Pursuing better imaging quality and miniaturizing imaging devices are two trends in the current development of ultrasound imaging. While the former leads to more complex and expensive imaging equipment, poor image quality is a common problem of portable ultrasound imaging systems. In this paper, an image reconstruction method is proposed to break through the imaging-quality limitations of portable devices by introducing a generative adversarial network (GAN) model into the field of ultrasound image reconstruction. We combined two GAN generator models, the encoder-decoder model and the U-Net model, to build a sparse skip connection U-Net (SSC U-Net) to tackle this problem. To produce more realistic output, stabilize the training procedure, and improve spatial resolution in the reconstructed ultrasound images, a new loss function combining adversarial loss, L1 loss, and differential loss is proposed. Three datasets, comprising 50 pairs of simulation, 40 pairs of phantom, and 72 pairs of in vivo images, were used to evaluate the reconstruction performance. Experimental results show that our SSC U-Net is able to reconstruct ultrasound images with improved quality. Compared with U-Net, our SSC U-Net preserves more details in the reconstructed images and improves the full width at half maximum (FWHM) of point targets by 3.23%.
    http://deepblue.lib.umich.edu/bitstream/2027.42/173940/1/13634_2019_Article_649.pd
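The FWHM figure of merit quoted above can be measured directly from a 1-D profile through a point target. A minimal sketch follows; the Gaussian point-spread profile and sampling grid are hypothetical stand-ins, not the paper's data. For a Gaussian of standard deviation sigma, the analytic FWHM is 2*sqrt(2*ln 2)*sigma ≈ 2.355*sigma, which the interpolated estimate should reproduce.

```python
import numpy as np

def fwhm(x, profile):
    """Full width at half maximum of a single-peaked 1-D profile,
    with linear interpolation at the two half-maximum crossings."""
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    i0, i1 = above[0], above[-1]

    def cross(a, b):
        # interpolate the position where the profile crosses `half`
        return x[a] + (half - profile[a]) * (x[b] - x[a]) / (profile[b] - profile[a])

    left = x[i0] if i0 == 0 else cross(i0 - 1, i0)
    right = x[i1] if i1 == len(x) - 1 else cross(i1 + 1, i1)
    return right - left

# Hypothetical point-target profile: a Gaussian PSF with sigma = 0.8
x = np.linspace(-5.0, 5.0, 2001)
sigma = 0.8
psf = np.exp(-x**2 / (2.0 * sigma**2))
width = fwhm(x, psf)  # ≈ 2.355 * sigma ≈ 1.884
```

A 3.23% FWHM improvement then simply means the reconstructed point target's measured width is 3.23% narrower than the baseline's.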